
Collaborating Authors

Boston Consulting Group


From RAGs to riches: Using large language models to write documents for clinical trials

Markey, Nigel, El-Mansouri, Ilyass, Rensonnet, Gaetan, van Langen, Casper, Meier, Christoph

arXiv.org Artificial Intelligence

Clinical trials require numerous documents to be written -- protocols, consent forms, clinical study reports and others. Large language models (LLMs) offer the potential to rapidly generate first versions of these documents; however, there are concerns about the quality of their output. Here we report an evaluation of LLMs in generating parts of one such document, clinical trial protocols. We find that an off-the-shelf LLM delivers reasonable results, especially when assessing content relevance and the correct use of terminology. However, deficiencies remain, specifically in clinical thinking and logic and in the appropriate use of references. To improve performance, we used retrieval-augmented generation (RAG) to prompt an LLM with accurate, up-to-date information. With RAG, the writing quality of the LLM improves substantially, which has implications for the practical usability of LLMs in clinical trial-related writing.
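The abstract describes the general RAG pattern: retrieve relevant passages, then prepend them to the prompt so the model grounds its draft in them. Below is a minimal sketch of that pattern, not the authors' implementation; the naive token-overlap retriever, the toy corpus, and all function names are illustrative assumptions (production systems typically use dense embeddings and an actual LLM API).

```python
# Minimal RAG sketch (illustrative only): retrieve the most relevant
# passages, then build a grounded prompt for an LLM.

def tokenize(text):
    """Lowercase bag-of-words tokenization (a deliberately naive stand-in
    for embedding-based retrieval)."""
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Return the k passages sharing the most tokens with the query."""
    return sorted(
        corpus,
        key=lambda p: len(tokenize(query) & tokenize(p)),
        reverse=True,
    )[:k]

def build_prompt(query, passages):
    """Prepend retrieved context so the LLM drafts from it rather than
    from parametric memory alone."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Use only the context below when drafting this protocol section.\n"
        f"Context:\n{context}\n"
        f"Task: {query}"
    )

# Hypothetical mini-corpus of protocol snippets.
corpus = [
    "Inclusion criteria: adults aged 18-65 with a confirmed diagnosis.",
    "Primary endpoint: change in symptom score at week 12.",
    "The study uses a randomized double-blind design.",
]

passages = retrieve("primary endpoint symptom score", corpus)
prompt = build_prompt("Draft the endpoints section of the protocol", passages)
```

The resulting `prompt` would then be sent to the LLM; the key design choice is that retrieval happens at query time, so the context can be kept current without retraining the model.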


Generative AI – Astonishing but no reason to be afraid

#artificialintelligence

Generative AI (GenAI) is a form of artificial intelligence that can create novel content such as audio, data, code, or images. It will drastically change how we approach digital content and how a range of jobs are executed in the future.[1] GenAI gained significant attention worldwide when the California-based company OpenAI launched its artificial intelligence chatbot ChatGPT 3.5 in November 2022. The chatbot gained over 100 million users within the first two months after release.[2] ChatGPT is a text-based machine learning model built from several language models and large amounts of public data, which allow it to create new content based on user instructions. Its capabilities range from explaining quantum physics to writing poems that sound as if they were written by humans.[3]


White House Blueprint is the Starting Point for Building Responsible AI - Nextgov

#artificialintelligence

Late last year, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights, instantly elevating the topic of responsible AI to the top of leadership agendas across executive branch agencies. While the themes of the blueprint are not entirely new--building on prior work including the AI in Government Act of 2020, a December 2020 executive order on trustworthy AI, and the Federal Privacy Council's Fair Information Practice Principles--the report brings new urgency to ongoing agency efforts to leverage data in ways consistent with our democratic ideals. With a stated goal of supporting "the development of policies and practices that protect civil rights and promote democratic values in the building, deployment and governance of automated systems," the blueprint is rooted in five principles: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback. The blueprint also includes notes on applying the principles and a technical companion to support operationalization. Some agencies that are less mature in their data capabilities might consider the blueprint to be of limited relevance.


Frank von Thun 🔴 on LinkedIn: #davos2023 #mbb #mckinsey #mckinseyatdavos #bcg #bcgatdavos…

#artificialintelligence

I have summarised the key takeaways of the MBBs for you: the latest summit in Davos brought attention to crucial topics for businesses today, with a focus on sustainability, digital transformation, and the future of work. Let's take a look at the key takeaways shared by McKinsey & Company, Boston Consulting Group, and Bain & Company. McKinsey & Company's takeaways included the need to focus on sustainability, digital transformation, and the future of work, as well as addressing global inequality and the impact of the Covid-19 pandemic. The importance of investing in new technologies such as artificial intelligence, machine learning, and big data was also highlighted. Finally, the summit discussed the need for long-term strategic planning and for businesses to be agile and adaptive in order to remain competitive in the current global landscape.


Data Platforms and Network Effects

Communications of the ACM

Industry platforms are foundations that bring people and organizations together for a common purpose, which usually includes making money. They function at the level of a market or ecosystem, rather than only within a specific firm. They often start with products such as operating systems and microprocessors, services such as social media and messaging systems, or marketplaces for e-commerce and financial transactions. They can link thousands, millions, or even billions of users and other market actors. But another type of industry platform has recently received attention from consultants such as The Boston Consulting Group as well as investors, entrepreneurs, and policymakers.


The power of A.I. to help mitigate and manage climate change

#artificialintelligence

Just as artificial intelligence has improved the decisions organizations make to optimize financial performance, improve processes, meet customer needs, and more, it will be critical in helping them reach their climate goals. In fact, because it can gather, complete, and interpret large, complex datasets on emissions and climate impact, A.I. is fundamentally important in helping to manage the full range of climate-related issues. BCG recently conducted a global survey of 1,000 leaders in A.I. and climate that tells us more about that potential--as well as the barriers getting in the way. We found that 87% of respondents feel that advanced analytics and A.I., or simply "A.I.," is a helpful tool in the fight against climate change today, but only 43% say that they have a vision for using A.I. in their own climate change efforts. They see the greatest business value for A.I. in the reduction and measurement of emissions.


87% of climate and AI leaders believe that AI is critical in the fight against climate change

#artificialintelligence

DUBAI: Climate change will have significant impacts on environmental, social, political, and economic systems around the world. Climate change mitigation, along with adaptation and resilience, is therefore crucial. Efforts to achieve net-zero emissions by 2050 will be essential, as will efforts to prepare for the consequences of climate change and to minimize the resulting harm. Applying advanced analytics and artificial intelligence (AI) to climate challenges provides a vital way to make meaningful change at this critical moment. According to a new report from the AI for the Planet Alliance, produced in collaboration with Boston Consulting Group (BCG) and BCG GAMMA, 87% of public- and private-sector leaders who oversee climate and AI topics believe that AI is a valuable asset in the fight against climate change.


When choosing a responsible AI leader, tech skills matter

#artificialintelligence

Abhishek Gupta is the founder and principal researcher at the Montreal AI Ethics Institute and senior Responsible AI leader and expert at Boston Consulting Group; Steven Mills is the Global GAMMA Chief AI Ethics Officer at Boston Consulting Group. The Responsible AI (RAI) domain is at an inflection point: we are moving decidedly from principles to practice. As organizations mature their understanding, they are feeling the pressure to act from customer demands and impending regulatory requirements. RAI means developing and operating artificial intelligence systems that align with organizational values and widely accepted standards of right and wrong while achieving transformative business impact. But successfully operationalizing RAI requires a leader with the right mix of knowledge, skills, abilities and experience, and RAI remains a nascent field.


Does Artificial Intelligence (AI) Need a Social Licence?

#artificialintelligence

According to the Boston Consulting Group (BCG), companies have no option but to acquire a social license for AI. When Mary Shelley wrote Frankenstein in 1818, she was writing about technology. Dr. Victor Frankenstein created a man who becomes a monster – leaping over his creator's expectations and terrifying the townspeople until his creator shuts him down. One wonders, if Frankenstein had been given the right circumstances, would the story have ended in triumph? Reading Boston Consulting Group's article "Why AI Needs a Social License" reminded me of Shelley's classic.


Should Organizations Link Responsible AI and Corporate Social Responsibility? It's Complicated.

#artificialintelligence

MIT Sloan Management Review and Boston Consulting Group (BCG) have assembled an international panel of AI experts that includes academics and practitioners to help us gain insights into how responsible artificial intelligence (RAI) is being implemented in organizations worldwide. This month's question for our panelists: Should an organization tie its RAI efforts to its overall corporate social responsibility (CSR) efforts? The results present a mixed picture. While 52% of panelists (11 out of 21) believe that an organization's RAI and CSR efforts should be linked, 24% do not (5 out of 21 disagree or strongly disagree), and an equal percentage expressed ambivalence (5 out of 21 neither agree nor disagree). Despite the lack of consensus, there are some common characteristics among those who agree that organizations should link their RAI and CSR efforts, as well as some concerns shared among the remaining panelists.